
Measurement uncertainties beyond the ISO-Guide

Michael Grabe

Dr. Michael Grabe, 38104 Braunschweig, Am Hasselteich 5, Germany,

Phone and Fax + 49 531 371642; e-mail: m.grabe@aslan.bs.shuttle.de

Abstract — Over the past twenty years, in parallel with the drafting and dissemination of the "ISO Guide to the Expression of Uncertainty in Measurement", an alternative model for combining and propagating random and systematic errors has been developed. Although the ISO Guide is now used worldwide, from a scientific point of view its most important property is that it leaves open the question of whether a given uncertainty covers the true value of the physical quantity or not. In contrast, the alternative model of error calculus, briefly described below, always localizes, quite naturally, the smallest uncertainty covering the true value of the physical quantity with reasonable certainty. This is possible thanks to a complete revision of the traditional formalisms of error calculus. As will be shown, the new formalisms solve hitherto "unsolvable" problems, strictly reject procedures that have so far been considered unconditionally accepted, and, finally, introduce new ideas into error calculation.

Main part

Physicists are used to expressing the uncertainties of their measurements according to, I am afraid to say, historically grown procedures which are inconsistent and untidy. Unfortunately, these inappropriate procedures have become the object of an internationally agreed recommendation, the so-called "ISO Guide to the Expression of Uncertainty in Measurement", [1].

The contribution at issue outlines alternative formalisms of error propagation and uncertainty estimation. Yielding reliable, self-consistent uncertainties, the new formalisms claim nothing less than to guide the natural sciences back to objectivity.

As analytical inspections and computer simulations reveal, the ISO-Guide leads to ill-defined uncertainties, heavily calling into question the objectivity of the natural sciences. The main reproach against the ISO-Guide aims at its inability to state whether a given uncertainty covers the true value of the physical quantity of interest or not. However, if we are not sure whether the true value is covered, we are unable to express any scientific statement at all about what we have done. On the highest physical level, we want to know: do neutrinos possess a non-zero mass? What are the true values of the so-called fundamental physical constants? On the other hand, in our daily work, we clearly have to know: is the calibration of a given measuring device really reliable?

The idea to revise the procedures for estimating measurement uncertainties is based on a set of five fundamental equations of error calculus, tying up the most basic quantities of measurement: the measured value x_l, the true value x_0, the random error ε_l, the unknown systematic error f, and the arithmetic mean x̄. As has been shown in [4], these equations should read as follows

x_l = x_0 + ε_l + f ,   (l = 1, ..., n)   (1)

While the random errors ε_l appear in the course of the measurements, the unknown systematic error f is generated before the measurements start, before any of the x_l has been taken. The bias or unknown systematic error f is to be attributed to imperfect adjustments, to biases of detectors, to boundary conditions, to environmental influences, etc. In any case, the experimenter is not in a position to control and to eliminate such time-constant perturbations by simply subtracting them, as they remain unknown with respect to magnitude and sign! This is the situation physicists have to live with, and this is how metrology works. To repeat, all we know about f is f = const. and

−f_s ≤ f ≤ +f_s .   (2)

While repeated measurements are apt to reduce the influence of random errors, they leave the bias f unaltered!
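This behaviour is easy to see numerically. The following minimal sketch simulates readings according to equation (1); the true value, bias, and scatter are assumed numbers chosen for illustration:

```python
import random

random.seed(1)
x0 = 10.0       # assumed true value
f = 0.3         # assumed unknown systematic error, constant throughout
sigma = 0.5     # assumed scatter of the normally distributed random errors

for n in (10, 1000, 100_000):
    xs = [x0 + random.gauss(0.0, sigma) + f for _ in range(n)]
    mean = sum(xs) / n
    print(n, "repeats -> mean", round(mean, 3))
```

However many readings are averaged, the mean settles at x0 + f = 10.3, never at the true value 10.0: averaging suppresses the random scatter but leaves the bias untouched.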

In contrast to traditional error calculus, here any arithmetic mean x̄ is always biased relative to the true value x_0, the difference between the expectation μ = E(x̄) and x_0 being given by

μ = E(x̄) = x_0 + f   (f = const.)   (3)

assuming a symmetrical distribution of the random errors; for practical reasons we shall go a step further and attribute a normal distribution to them.

The author criticizes the worldwide practice of postulating a distribution density for f. Though nothing is known or ever knowable about such a distribution, a rectangular distribution density has been introduced, though any other distribution defined by postulate would have been equally factitious. Be that as it may, any symmetrical distribution removes the expectation of f, leading, quite in contradiction to physical reality, to E(x̄) = x_0. There are many reasons why a postulated density for f leads to difficulties, [5].

Combining (1) and (3) relates the scattering of the random errors to the expectation μ of the arithmetic mean x̄,

x_l − μ = ε_l ;   (l = 1, ..., n)   (4)

In order to avoid writing down repeatedly the sum sign in the expression for the arithmetic mean x̄, we introduce the identity

x̄ ≡ (1/n) Σ_{l=1}^{n} x_l = x_0 + ε̄ + f ,   ε̄ = (1/n) Σ_{l=1}^{n} ε_l .   (5)
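The decomposition behind these equations can be verified numerically: the arithmetic mean of simulated readings equals the true value plus the mean random error plus the bias, exactly. A minimal sketch with assumed numbers:

```python
import random

random.seed(7)
n = 50
x0, f = 5.0, -0.2                                  # assumed true value and bias
eps = [random.gauss(0.0, 0.1) for _ in range(n)]   # random errors of eq. (1)
xs = [x0 + e + f for e in eps]                     # simulated readings x_l

x_bar = sum(xs) / n        # arithmetic mean of the readings
eps_bar = sum(eps) / n     # arithmetic mean of the random errors
# the mean decomposes exactly into true value + mean random error + bias
print(abs(x_bar - (x0 + eps_bar + f)) < 1e-12)
```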

These error equations will be the basis of all further considerations. This is why they should be called the fundamental error equations of metrology.

To make the points that follow as clear as possible:

The aim is to define the smallest measurement uncertainties covering the true values of the physical quantities in question with (a reasonable amount of) certainty.

The ISO-Guide does not raise such a claim.

In regard to error propagation within a given function φ(x̄, ȳ), we only admit the same number n of repeated measurements for x and y, so that we are in a position to introduce a compound probability density p(x, y). Without pairs (x_l, y_l), no distribution density! To be sure, n_x = n_y is not a physical demand, as any physical law is, of course, quite independent of the number of repeated measurements; in particular, the result of a measurement may not depend on two or three repeated measurements more or less. Introducing a compound probability density p(x, y), we do not have to expect any more difficulties at all, as then there are appropriate statistical procedures.

Associating a compound probability density with x and y, the only thing we have to do is to ensure n_x = n_y. This claim stands in sharp contrast to traditional error propagation, which explicitly admits n_x ≠ n_y, thereby running into tremendous difficulties in regard to, e.g., the problem of how to define confidence levels and confidence intervals. In what follows we shall always assume arithmetic means to involve the same number of repeated measurements.

Given a function φ = φ(x̄, ȳ) and measurements

x_l = x_0 + ε_{x,l} + f_x ,   y_l = y_0 + ε_{y,l} + f_y ;   (l = 1, ..., n),

the overall uncertainty of φ(x̄, ȳ) is given by [5]

u_φ = (t_P(n−1)/√n) [ (∂φ/∂x̄)² s_x² + 2 (∂φ/∂x̄)(∂φ/∂ȳ) s_xy + (∂φ/∂ȳ)² s_y² ]^{1/2} + |∂φ/∂x̄| f_{s,x} + |∂φ/∂ȳ| f_{s,y}   (6)

where s_x², s_y², s_xy are the empirical variances and the empirical covariance of the two series, and f_{s,x}, f_{s,y} are the bounds on the unknown systematic errors according to (2).
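The overall uncertainty, a Student-t confidence term for the random part plus worst-case linear terms for the systematic bounds, translates into a short routine. A minimal sketch; the function name and the numerical inputs of the example are assumptions for illustration:

```python
import math

def overall_uncertainty(xs, ys, dphidx, dphidy, fsx, fsy, t):
    """Overall uncertainty of phi(x_bar, y_bar): a Student-t confidence
    term for the random errors plus worst-case linear terms for the
    unknown systematic errors (arithmetic, not root-sum-of-squares)."""
    n = len(xs)
    assert len(ys) == n                     # equal numbers of repeats required
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - x_bar) ** 2 for x in xs) / (n - 1)
    syy = sum((y - y_bar) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / (n - 1)
    s_phi = math.sqrt(dphidx ** 2 * sxx
                      + 2 * dphidx * dphidy * sxy
                      + dphidy ** 2 * syy)
    return t / math.sqrt(n) * s_phi + abs(dphidx) * fsx + abs(dphidy) * fsy

# example: phi = x + y, both partial derivatives 1, assumed bounds and t
u = overall_uncertainty([1.0, 2.0, 3.0], [2.0, 4.0, 6.0],
                        1.0, 1.0, 0.1, 0.1, 2.0)
```

Note that the empirical covariance enters with its sign, while the systematic bounds are always added with absolute partial derivatives: the worst case, not a probabilistic combination.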

In order to test this uncertainty against that which is specified by the ISO-Guide, we have to refer to computer simulations. The empirical coverage factor k allows one to enlarge or to shrink the ISO uncertainty to any desired degree. It turns out that the ISO uncertainty is unacceptably small should k = 1. One could of course universally agree on, say, k = 2; but why 2 and not 2.5? As there is no objective criterion for k, we never know whether any measured result might be useful or not. In this respect, the ISO uncertainty violates a basic principle of the natural sciences: objectivity.
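Such a simulation can be sketched as follows. The parameter values, the k = 1 root-sum-of-squares combination standing in for the ISO recipe, and the trial count are assumptions for illustration; drawing a fresh bias for each simulated experiment merely scans the interval (2), while within one experiment f stays constant:

```python
import math
import random

random.seed(2)
x0, sigma, fs, n = 0.0, 1.0, 1.0, 10   # assumed true value, scatter, bound
t = 2.262                              # two-sided 95 % Student factor, 9 d.o.f.
trials = 5000
cover_iso = cover_alt = 0

for _ in range(trials):
    f = random.uniform(-fs, fs)        # bias, fixed for the whole series
    xs = [x0 + random.gauss(0.0, sigma) + f for _ in range(n)]
    xb = sum(xs) / n
    s = math.sqrt(sum((x - xb) ** 2 for x in xs) / (n - 1))
    u_iso = math.sqrt(s ** 2 / n + fs ** 2 / 3)  # ISO-style RSS with k = 1
    u_alt = (t / math.sqrt(n)) * s + fs          # arithmetic sum, as in (6)
    cover_iso += abs(xb - x0) <= u_iso
    cover_alt += abs(xb - x0) <= u_alt

print("coverage, ISO-style (k = 1):", cover_iso / trials)
print("coverage, arithmetic sum:", cover_alt / trials)
```

In runs of this kind the root-sum-of-squares interval with k = 1 misses the true value in a substantial fraction of the experiments, whereas the arithmetic combination covers it almost always.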

In contrast to this, we arrived at:

The uncertainty, as defined in (6), specifies the smallest uncertainty that covers the true value with (a reasonable) certainty.

Clearly, this is what we want to have.

Let us move a step further and consider a least-squares adjustment of r parameters β_1, ..., β_r from m > r observational equations

Σ_{k=1}^{r} a_{ik} β_k ≈ x̄_i ,   (i = 1, ..., m)   (7)

where the right-hand sides are given by

x̄_i = x_{0,i} + ε̄_i + f_i ;   (i = 1, ..., m).

As the x̄_i are biased, the Gauss-Markov theorem breaks down. The question is: can we nevertheless optimize the adjustment? The answer is yes, but we have to select the weights empirically: varying the weights by trial and error, we finally arrive at minimal uncertainties of the estimators β̄_k. This procedure works quite well and additionally reveals whether a given input datum is discrepant, [7].
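The trial-and-error weight selection can be sketched with a toy straight-line adjustment. The data, the bounds f_{s,i}, the candidate weights, and the choice of the worst-case systematic bound on the slope as the quantity to minimize are all assumptions for illustration:

```python
import math

# assumed toy data: straight line x(t) = b0 + b1 * t
ts = [0.0, 1.0, 2.0, 3.0]
xb = [0.1, 1.1, 2.0, 3.4]        # arithmetic means of the four input series
fs = [0.1, 0.1, 0.1, 0.5]        # bounds on the unknown systematic errors

def fit(weights):
    """Weighted least squares for b0, b1 via the normal equations."""
    sw = sum(weights)
    st = sum(w * t for w, t in zip(weights, ts))
    stt = sum(w * t * t for w, t in zip(weights, ts))
    sx = sum(w * x for w, x in zip(weights, xb))
    stx = sum(w * t * x for w, t, x in zip(weights, ts, xb))
    det = sw * stt - st * st
    return (stt * sx - st * stx) / det, (sw * stx - st * sx) / det

def slope_sys_bound(weights):
    """Worst-case propagation of the systematic bounds into the slope b1."""
    sw = sum(weights)
    st = sum(w * t for w, t in zip(weights, ts))
    stt = sum(w * t * t for w, t in zip(weights, ts))
    det = sw * stt - st * st
    # b1 = sum_i c_i * xb_i with c_i = (sw * w_i * t_i - st * w_i) / det;
    # the worst case takes each |c_i| against its bound fs_i
    return sum(abs(sw * w * t - st * w) / det * f
               for w, t, f in zip(weights, ts, fs))

# trial and error: down-weight the suspect fourth mean, keep the weight
# that yields the smallest systematic bound on the slope
best = min((slope_sys_bound([1.0, 1.0, 1.0, w4]), w4)
           for w4 in (1.0, 0.5, 0.2, 0.1))
b0, b1 = fit([1.0, 1.0, 1.0, best[1]])
print("chosen weight:", best[1], "slope bound:", round(best[0], 3))
print("adjusted parameters:", round(b0, 3), round(b1, 3))
```

Here the datum with the large bound f_s = 0.5 earns an ever smaller weight; the scan over candidate weights stands in for the trial-and-error variation described in the text.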

Transferring the above mechanism of error propagation, we are in a position to quote overall uncertainties for the adjusted parameters β̄_k, including random and systematic components. More than that, the new formalisms allow one to formalize the couplings between the estimators β̄_k: those which refer to random errors and those which are due to unknown systematic errors. Couplings of the latter kind give rise to so-called security polytopes, hitherto unknown in error propagation, [6, 7].

From what has been said before, there is no need to stress that unknown systematic errors cause a breakdown of the analysis of variance. To make the point as clear as possible: there is definitely no analysis of variance in regard to any kind of empirical data; whether that is acknowledged or not, it is a fact.

Nevertheless, we may use the new formalisms to compare a given set of arithmetic means pairwise. In the simplest case, for the two means x̄_1 and x̄_2, each based on n repeated measurements, we get the condition

|x̄_1 − x̄_2| ≤ (t_P(n−1)/√n) s_{x̄_1−x̄_2} + f_{s,1} + f_{s,2} ,

where s_{x̄_1−x̄_2} denotes the empirical standard deviation of the differences x_{1l} − x_{2l} and f_{s,1}, f_{s,2} bound the two unknown systematic errors. Any difference x̄_1 − x̄_2 satisfying this relation implies compatible means x̄_1 and x̄_2.
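A pairwise compatibility check of this kind can be sketched as follows; the two series, the bounds, and the Student factor in the example are assumed for illustration:

```python
import math

def compatible(xs1, xs2, fs1, fs2, t):
    """Pairwise comparison of two arithmetic means: the difference must
    stay within the random-error confidence term plus the sum of the
    two systematic bounds."""
    n = len(xs1)
    assert len(xs2) == n                 # equal numbers of repeats required
    m1, m2 = sum(xs1) / n, sum(xs2) / n
    d = [a - b for a, b in zip(xs1, xs2)]
    d_bar = sum(d) / n
    s_d = math.sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))
    return abs(m1 - m2) <= t / math.sqrt(n) * s_d + fs1 + fs2

# example: two assumed series, bounds 0.1 each, t = 4.303 (95 %, 2 d.o.f.)
ok = compatible([1.0, 1.1, 0.9], [1.05, 1.0, 1.1], 0.1, 0.1, 4.303)
```

Working with the differences keeps the pairing of the measurements, in line with the demand for equal numbers of repeats made above.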

References

[1] ISO, Guide to the Expression of Uncertainty in Measurement, 1993. 1, Rue de Varembé, Case postale 56, CH-1211 Genève 20, Switzerland.

The Guide is based on Recommendation 1 (CI-1981) of the CIPM and on Recommendation INC-1 (1980) of the "Working Group on the Statement of Uncertainties" of the BIPM. It has been signed by the IEC, IFCC, IUPAC, IUPAP and the OIML.

[2] Eisenhart, C., The Reliability of Measured Values - Part I, Fundamental Concepts; Photogrammetric Engineering 18 (1952) 543-561

[3] Eisenhart, C., in H.H. Ku (Ed.): Precision Measurement and Calibration, NBS Special Publication 300, Vol. 1, US Government Printing Office, Washington D.C., 1969

[4] Grabe, M., Principles of "Metrological Statistics", Metrologia 23 (1986/87) 213-219

[5] Grabe, M., Towards a New Standard for the Assignment of Measurement Uncertainties, National Conference of Standards Laboratories, 31 July - 4 August 1994, Chicago

[6] Grabe, M., Uncertainties, Confidence Ellipsoids and Security Polytopes in LSA, Physics Letters A 165 (1992) 124-132; Erratum: A 205 (1995) 425

[7] Grabe, M., An Alternative Algorithm for Adjusting the Fundamental Physical Constants, Physics Letters A 213 (1996) 125-137

